

Search for: All records

Creators/Authors contains: "University of Washington, USA"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Existing methodology on food accessibility focuses predominantly on on-premise services, that is, dine-in and shopping at stores, and assumes a linear distance-decay property (the closer, the higher the accessibility). Access to delivery services is fundamentally different from access to on-premise stores. Stores in close proximity (within an inner boundary) are less desirable for delivery because of delivery fees, and there is an outer boundary beyond which deliveries are unavailable; both boundaries challenge the assumption of increasing impediment with distance, and together they form a donut-shaped catchment for delivery services. We propose a modified 2-step floating catchment area method that incorporates this donut shape, accounts for both demand and supply, and examines the diversity of food options. Using Seattle as a case study, we find that delivery services increase restaurant and fast-food accessibility in areas that already have good accessibility (e.g., downtown Seattle for restaurants and South Seattle for fast food). Given that South Seattle is where low-income and low-access households are concentrated, the increase in fast-food accessibility may not be desirable. Interestingly, with delivery services, more low-income or low-access households (those who live far from grocery stores) have better accessibility to fresh produce from grocery stores than the rest of the population. Moreover, the newly created Supplemental Nutrition Assistance Program (SNAP) online program appears to miss low-access households. These findings have important implications for policymakers and stakeholders seeking to improve food accessibility in urban areas through delivery services.
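The donut-shaped catchment described above translates directly into code. The following Python sketch is a minimal, hypothetical version of a donut-shaped 2-step floating catchment area (2SFCA) computation; the data shapes, function names, and radii are illustrative assumptions, and the paper's actual method also weights for the diversity of food options.

```python
# Hedged sketch of a donut-shaped 2SFCA, assuming illustrative radii and
# data shapes; the paper's exact weighting scheme may differ.
from collections import defaultdict

def donut_2sfca(stores, blocks, distance, inner_km=0.5, outer_km=8.0):
    """stores: {store_id: supply}; blocks: {block_id: demand};
    distance(block_id, store_id) -> km. Returns {block_id: accessibility}."""
    def in_donut(b, s):
        # Delivery is viable only between the inner and outer boundaries.
        return inner_km <= distance(b, s) <= outer_km

    # Step 1: each store's supply-to-demand ratio over blocks in its donut.
    ratio = {}
    for s, supply in stores.items():
        demand = sum(d for b, d in blocks.items() if in_donut(b, s))
        ratio[s] = supply / demand if demand else 0.0

    # Step 2: a block's accessibility sums the ratios of every store
    # whose donut covers it.
    access = defaultdict(float)
    for b in blocks:
        access[b] = sum(ratio[s] for s in stores if in_donut(b, s))
    return dict(access)
```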
  2. Peer-produced goods, such as online knowledge bases and free/libre open source software, rely on contributors who often choose their tasks regardless of consumer needs. These goods are susceptible to underproduction: popular goods that are of relatively low quality. Although underproduction is a common feature of peer production, very little is known about how to counteract it. We use a detailed longitudinal dataset from English Wikipedia to show that more experienced contributors, including those who contribute without an account, tend to contribute to underproduced goods. A within-person analysis shows that contributors' efforts shift toward underproduced goods over time. These findings illustrate the value of retaining contributors in peer production, including those contributing without accounts, as a means to counter underproduction.
  3. We are Indigenous knowledge holders, herbalists, academics, and community environmental health staff who have the honor and responsibility of curating and sharing knowledge about traditional plants and medicines. Together, we created the Native Plants and Foods Curriculum Portal. This online sharing platform increases accessibility to Indigenous knowledge that is vital to the wellbeing of our environment and all those within it. In this article, we tell our story to concurrently share the knowledge itself as well as the importance of respecting the knowledge and the responsibility of holding that knowledge. 
  4. 3D Computer-Aided Design (CAD) modeling is ubiquitous in mechanical engineering and design. Modern CAD models are programs that produce geometry and can be used to implement high-level geometric changes by modifying input parameters. While there has been a surge of recent interest in program-based tooling for the CAD domain, one fundamental problem remains unsolved. CAD programs pass geometric arguments to operations using references, which are queries that select elements from the constructed geometry according to programmer intent. The challenge is designing reference semantics that can express programmer intent across all geometric topologies achievable with model parameters, including topologies where the desired elements are not present. In current systems, both users and automated tools may create invalid models when parameters are changed, as references to geometric elements are lost or silently and arbitrarily switched. While existing CAD systems use heuristics to attempt to infer user intent in these cases of undefined behavior, this best-effort solution is not suitable for constructing automated tools to edit and optimize CAD programs. We analyze the failure modes of existing referencing schemes and formalize a set of criteria on which to evaluate solutions to the CAD referencing problem. In turn, we propose a domain-specific language that exposes references as a first-class language construct, using user-authored queries to introspect element history and define references safely over all paths. We give a semantics for fine-grained element lineage that can subsequently be queried, and we show that our language meets the desired properties. Finally, we provide an implementation of a lineage-based referencing system in a 2.5D CAD kernel, demonstrating realistic referencing scenarios and illustrating how our system safely represents models that cause reference breakage in existing CAD systems.
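To make the lineage-based referencing idea above concrete, here is a hedged Python sketch in which each geometric element records the operations that produced it, and a reference is a user-authored query over that lineage. The point is that resolution is defined over all paths: an empty result is an explicit, legal outcome rather than undefined behavior. All names here are hypothetical, and the paper's DSL and 2.5D kernel are far richer than this toy.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Element:
    kind: str        # e.g. "edge" or "face"
    lineage: tuple   # operations that created/modified it, oldest first

@dataclass
class Reference:
    predicate: callable  # user-authored query over an element's lineage

    def resolve(self, model):
        # Defined over all paths: an empty list is an explicit outcome,
        # not a silently re-bound or dangling reference.
        return [e for e in model if self.predicate(e)]

model = [
    Element("edge", ("sketch1", "extrude1")),
    Element("edge", ("sketch1", "extrude1", "fillet1")),
]
# Select exactly the edges that passed through the fillet operation.
filleted = Reference(lambda e: "fillet1" in e.lineage).resolve(model)
```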
  5. Reusable symbolic evaluators are a key building block of solver-aided verification and synthesis tools. A reusable evaluator reduces the semantics of all paths in a program to logical constraints, and a client tool uses these constraints to formulate a satisfiability query that is discharged with SAT or SMT solvers. The correctness of the evaluator is critical to the soundness of the tool and the domain properties it aims to guarantee. Yet so far, trust in these evaluators has rested on an ad hoc foundation of testing and manual reasoning. This paper presents the first formal framework for reasoning about the behavior of reusable symbolic evaluators. We develop a new symbolic semantics for these evaluators that incorporates state merging. Symbolic evaluators use state merging to avoid path explosion and generate compact encodings. To accommodate a wide range of implementations, our semantics is parameterized by a symbolic factory, which abstracts away the details of merging and creation of symbolic values. The semantics targets a rich language that extends Core Scheme with assumptions and assertions, and thus supports branching, loops, and (first-class) procedures. The semantics is designed to support reusability by guaranteeing two key properties: the legality of the generated symbolic states and the reducibility of symbolic evaluation to concrete evaluation. Legality makes it simpler for client tools to formulate queries, and reducibility enables testing of client tools on concrete inputs. We use the Lean theorem prover to mechanize our symbolic semantics, prove that it is sound and complete with respect to the concrete semantics, and prove that it guarantees legality and reducibility. To demonstrate the generality of our semantics, we develop Leanette, a reference evaluator written in Lean, and Rosette 4, an optimized evaluator written in Racket. We prove Leanette correct with respect to the semantics, and validate Rosette 4 against Leanette via solver-aided differential testing. To demonstrate the practicality of our approach, we port 16 published verification and synthesis tools from Rosette 3 to Rosette 4. Rosette 3 is an existing reusable evaluator that implements the classic merging semantics, adopted from bounded model checking. Rosette 4 replaces the semantic core of Rosette 3 but keeps its optimized symbolic factory. Our results show that Rosette 4 matches the performance of Rosette 3 across a wide range of benchmarks, while providing a cleaner interface that simplifies the implementation of client tools.
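The state-merging step described above can be shown in miniature: at a control-flow join, the evaluator folds the two post-branch states into one by guarding each disagreeing value with a symbolic if-then-else term, so the number of states stays constant instead of doubling per branch. The Python below is an illustrative simplification; evaluators like Rosette delegate the construction and simplification of such terms to a symbolic factory.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Ite:
    """Symbolic if-then-else term: guard ? then_val : else_val."""
    guard: object
    then_val: object
    else_val: object

def merge(guard, state_then, state_else):
    """Fold two post-branch states into one, avoiding path explosion:
    each variable maps to a single (possibly guarded) value."""
    merged = {}
    for var in state_then.keys() | state_else.keys():
        t, e = state_then.get(var), state_else.get(var)
        merged[var] = t if t == e else Ite(guard, t, e)
    return merged

# After `if b: x = 1; y = 0 else: x = 2; y = 0`, one merged state remains:
state = merge("b", {"x": 1, "y": 0}, {"x": 2, "y": 0})
# {'x': Ite('b', 1, 2), 'y': 0}
```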
  6. Many compilers, synthesizers, and theorem provers rely on rewrite rules to simplify expressions or prove equivalences. Developing rewrite rules can be difficult: rules may be subtly incorrect, profitable rules are easy to miss, and rulesets must be rechecked or extended whenever semantics are tweaked. Large rulesets can also be challenging to apply: redundant rules slow down rule-based search and frustrate debugging. This paper explores how equality saturation, a promising technique that uses e-graphs to apply rewrite rules, can also be used to infer rewrite rules. E-graphs can compactly represent the exponentially large sets of enumerated terms and potential rewrite rules. We show that equality saturation efficiently shrinks both sets, leading to faster synthesis of smaller, more general rulesets. We prototyped these strategies in a tool dubbed Ruler. Compared to a similar tool built on CVC4, Ruler synthesizes 5.8× smaller rulesets 25× faster, without compromising proving power. In an end-to-end case study, we show that Ruler-synthesized rules perform as well as those crafted by domain experts and address a longstanding issue in a popular open-source tool.
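A rough sketch of the inference idea above: enumerate small terms, fingerprint each one by evaluating it on shared sample inputs, and treat terms with identical fingerprints as candidate rewrite rules. Ruler's actual pipeline uses e-graphs to compact both the term set and the candidate set and verifies candidates before accepting them; the Python below shows only the fingerprinting skeleton, with illustrative terms and samples.

```python
import itertools

SAMPLES = [(0, 1), (2, 3), (5, 5), (-1, 4)]   # sample (a, b) assignments

def fingerprint(term):
    """Evaluate a term over variables a and b on every sample."""
    return tuple(eval(term, {}, {"a": a, "b": b}) for a, b in SAMPLES)

terms = ["a + b", "b + a", "a * b", "b * a", "a - a", "0"]
candidates = [
    (lhs, rhs)
    for lhs, rhs in itertools.combinations(terms, 2)
    if fingerprint(lhs) == fingerprint(rhs)   # agree on all samples
]
# [('a + b', 'b + a'), ('a * b', 'b * a'), ('a - a', '0')]
# Candidates must still be verified (e.g. with an SMT solver) before adoption.
```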
  7. Library learning compresses a given corpus of programs by extracting common structure from the corpus into reusable library functions. Prior work on library learning suffers from two limitations that prevent it from scaling to larger, more complex inputs. First, it explores too many candidate library functions that are not useful for compression. Second, it is not robust to syntactic variation in the input. We propose library learning modulo theory (LLMT), a new library learning algorithm that additionally takes as input an equational theory for a given problem domain. LLMT uses e-graphs and equality saturation to compactly represent the space of programs equivalent modulo the theory, and uses a novel e-graph anti-unification technique to find common patterns in the corpus more directly and efficiently. We implemented LLMT in a tool named babble. Our evaluation shows that babble achieves better compression orders of magnitude faster than the state of the art. We also provide a qualitative evaluation showing that babble learns reusable functions on inputs previously out of reach for library learning.
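Anti-unification, the operation the paper above lifts to e-graphs, computes the most specific pattern that generalizes two terms by abstracting their disagreements into pattern variables. The sketch below is purely syntactic Python; babble works modulo an equational theory and over e-graphs, and its term representation differs from this illustrative one.

```python
import itertools

def anti_unify(t1, t2):
    """Terms are ('op', child, ...) tuples or leaf atoms."""
    fresh, seen = itertools.count(), {}
    def go(a, b):
        if a == b:
            return a
        if (isinstance(a, tuple) and isinstance(b, tuple)
                and a[0] == b[0] and len(a) == len(b)):
            return (a[0],) + tuple(go(x, y) for x, y in zip(a[1:], b[1:]))
        if (a, b) not in seen:                 # reuse one variable per
            seen[(a, b)] = f"?x{next(fresh)}"  # repeated mismatched pair
        return seen[(a, b)]
    return go(t1, t2)

# (+ (* 2 y) 1) vs (+ (* 3 y) 1) generalizes to (+ (* ?x0 y) 1):
pattern = anti_unify(("+", ("*", 2, "y"), 1), ("+", ("*", 3, "y"), 1))
```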
  8. We introduce SLANG.D, an extension to the Slang shading language that incorporates first-class automatic differentiation support. The new shading language allows us to transform a Direct3D-based path tracer to be fully differentiable with minor modifications to existing code. SLANG.D enables a shared ecosystem between machine learning frameworks and pre-existing graphics hardware API-based rendering systems, promoting the interchange of components and ideas across these two domains. Our contributions include a differentiable type system designed to ensure type safety and semantic clarity in codebases that blend differentiable and non-differentiable code, language primitives that automatically generate both forward and reverse gradient propagation methods, and a compiler architecture that generates efficient derivative propagation shader code for graphics pipelines. Our compiler supports differentiating code that involves arbitrary control flow, dynamic dispatch, generics, and higher-order differentiation, while giving developers flexible control of checkpointing and gradient aggregation strategies for best performance. Our system allows us to differentiate an existing real-time path tracer, Falcor, with minimal changes to its shader code. We show that the compiler-generated derivative kernels perform as efficiently as handwritten ones. In several benchmarks, SLANG.D code achieves significant speedups over prior automatic differentiation systems.
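As background for the gradient-propagation primitives mentioned above, forward-mode automatic differentiation can be demonstrated with dual numbers: every value carries a derivative that arithmetic operators propagate alongside it. This is an illustrative Python toy, not Slang code; SLANG.D's generated kernels additionally handle reverse mode, control flow, and GPU pipelines.

```python
from dataclasses import dataclass

@dataclass
class Dual:
    val: float   # primal value
    dot: float   # derivative with respect to the chosen input

    def __add__(self, o):
        return Dual(self.val + o.val, self.dot + o.dot)

    def __mul__(self, o):
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)

def f(x):                      # f(x) = x * x + x
    return x * x + x

y = f(Dual(3.0, 1.0))          # seed dx/dx = 1
# y.val == 12.0, y.dot == 7.0  (f'(x) = 2x + 1 = 7 at x = 3)
```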
  9. In studies of misinformation, the distinction between high- and low-credibility publishers is fundamental. However, there is much that we do not know about the relationship between the subject matter and timing of content produced by the two types of publishers. By analyzing the content of several million unique articles published over 28 months, we show that high- and low-credibility publishers operate in distinct news ecosystems. Bursts of news coverage generated by the two types of publishers tend to cover different subject matter at different times, even though fluctuations in their overall news production tend to be highly correlated. Regardless of the mechanism, temporally convergent coverage among low-credibility publishers has troubling implications for American news consumers. 
  10. An e-graph efficiently represents a congruence relation over many expressions. Although e-graphs were originally developed in the late 1970s for use in automated theorem provers, a more recent technique known as equality saturation repurposes them to implement state-of-the-art, rewrite-driven compiler optimizations and program synthesizers. However, e-graphs remain unspecialized for this newer use case. Equality saturation workloads exhibit distinct characteristics and often require ad hoc e-graph extensions to incorporate transformations beyond purely syntactic rewrites. This work contributes two techniques that make e-graphs fast and extensible, specializing them to equality saturation. A new amortized invariant restoration technique called rebuilding takes advantage of equality saturation's distinct workload, providing asymptotic speedups over current techniques in practice. A general mechanism called e-class analyses integrates domain-specific analyses into the e-graph, reducing the need for ad hoc manipulation. We implemented these techniques in a new open-source library called egg. Our case studies on three previously published applications of equality saturation highlight how egg's performance and flexibility enable state-of-the-art results across diverse domains.
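The rebuilding technique above can be illustrated with a toy e-graph: unions are recorded immediately, but the congruence invariant (a canonical hashcons with congruent nodes merged) is repaired in one deferred, amortized pass rather than after every union. The Python below is a hypothetical minimal structure that omits e-class analyses, extraction, and most bookkeeping; egg itself is a Rust library.

```python
class EGraph:
    def __init__(self):
        self.parent = {}     # union-find over e-class ids
        self.hashcons = {}   # canonical e-node -> e-class id
        self.dirty = []      # classes touched since the last rebuild

    def find(self, x):
        while self.parent[x] != x:
            x = self.parent[x]
        return x

    def add(self, op, *kids):
        node = (op,) + tuple(self.find(k) for k in kids)
        if node not in self.hashcons:
            cid = len(self.parent)
            self.parent[cid] = cid
            self.hashcons[node] = cid
        return self.hashcons[node]

    def union(self, a, b):
        a, b = self.find(a), self.find(b)
        if a != b:
            self.parent[a] = b
            self.dirty.append(b)   # defer invariant repair to rebuild()

    def rebuild(self):
        """Re-canonicalize the hashcons once, merging congruent nodes;
        loop until no new unions were triggered."""
        while self.dirty:
            self.dirty.clear()
            for node, cid in list(self.hashcons.items()):
                canon = (node[0],) + tuple(self.find(k) for k in node[1:])
                del self.hashcons[node]
                if canon in self.hashcons:
                    self.union(self.hashcons[canon], cid)
                else:
                    self.hashcons[canon] = self.find(cid)

g = EGraph()
a, b = g.add("a"), g.add("b")
fa, fb = g.add("f", a), g.add("f", b)
g.union(a, b)
g.rebuild()                        # congruence: f(a) and f(b) now merge
assert g.find(fa) == g.find(fb)
```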